The Dark Side of the "Republic of Deepfakes"
I've spent months investigating one of the most disturbing digital crimes targeting the entertainment industry today. South Korea—home to the world's fastest internet and a tech-savvy population—now carries an unwanted label: the "Republic of Deepfakes."
Here's what shocked me most: according to Security Hero's 2023 report, 53% of all deepfake pornography victims globally are Korean, with 8 of the top 10 most-targeted individuals being K-pop stars. This isn't just a celebrity scandal—it's a systematic digital assault on women that has spread from idol stages to school classrooms and military barracks.
Which K-pop Idols Have Been Victimized?
Blackpink: Global Fame, Global Targeting
Blackpink members, particularly Lisa and Jennie, rank among the most frequently targeted individuals on international deepfake sites. Criminals use generative adversarial networks (GANs) to superimpose the members' faces onto explicit content, which then circulates through Telegram groups and dark-web forums.
YG Entertainment's Response: In September 2024, YG announced a "zero-tolerance policy" covering both Blackpink and BABYMONSTER. They've hired digital monitoring firms using AI counter-tracking technology and stated publicly: "We will pursue all possible legal action, including criminal prosecution."
TWICE: Cross-Border Digital Harassment
TWICE faces unique challenges due to their popularity across Korea, Japan, and Southeast Asia. Their wholesome image makes them targets for deliberately degrading content. I've also documented cases where AI voice-cloning technology has been used to impersonate members for fraud schemes.
JYP Entertainment's Response: JYP issued stern legal warnings in late 2024, emphasizing that these actions constitute sexual violence, not merely copyright infringement. They've partnered with cyber police and established evidence collection through fan reporting channels.
IU: When Classmates Become Perpetrators
IU's case reveals something deeply troubling about deepfake crimes. After years of online harassment, EDAM Entertainment filed criminal complaints against over 180 individuals—including one of IU's former middle school classmates. This shatters the assumption that perpetrators are anonymous strangers.
EDAM's Policy: "No settlement, no forgiveness." As of early 2025, multiple defendants have received substantial fines or probation. The company continues tracking those attempting to evade prosecution through overseas VPNs.
aespa: The Irony of Metaverse Idols
There's a bitter irony in aespa's situation. As a group built on "metaverse" and "AI avatar" concepts, members like Winter and Karina face exploitation that blurs the line between their virtual and real identities. Malicious accounts on X (Twitter) spread synthetic photos alongside fabricated scandals.
SM Entertainment's Response: SM launched "KWANGYA 119," a dedicated reporting platform. In early 2026, they filed criminal complaints against 11 accounts and took the rare step of publicly naming offenders—a "name and shame" strategy unprecedented in K-pop crisis management.
NewJeans: Protecting Underage Idols
NewJeans' case raises the most serious concerns. Content targeting their underage members has circulated in Telegram rooms organized by school names, crossing the line into Child Sexual Abuse Material (CSAM).
HYBE/ADOR's Breakthrough: In April 2025, following an MOU with Gyeonggi Northern Police, authorities arrested 8 individuals for creating and distributing deepfakes of HYBE artists. This marked the first large-scale coordinated takedown between an entertainment company and police.
Other Notable Cases
- MOMOLAND Nancy: Secretly filmed changing-room footage was manipulated into nude content, combining illegal filming ("molka") with deepfake technology. The case fed into legislative discussions in the United States.
- IVE Social Media Incident: In January 2024, a Starship Entertainment employee accidentally posted a deepfake image of Jang Wonyoung to the official Weibo account—exposing how even professionals struggle to identify synthetic content.
The Numbers: How Widespread Is This Crisis?
Global Data
- 53% of deepfake porn victims are Korean
- 8 of 10 most-targeted individuals globally are K-pop artists
- Nearly 100,000 new videos were uploaded to top deepfake sites in just two months (July-August 2023)
South Korea's Internal Statistics
- Requests for help regarding digital sexual violence reached 8,983 in 2023—up 12.6% year-over-year
- Police cases rose from 156 (2021) to 964 (October 2024)
- The KCSC ordered removal of 23,107 sexually exploitative deepfake videos in 2024—3.2 times the 2023 figure
The Telegram Crisis: 227,000 Users in a "Deepfake Factory"
In August 2024, investigators exposed a Telegram bot channel with 227,000 subscribers. The operation was chillingly simple:
1. Upload any woman's photo
2. Receive a nude deepfake within seconds
3. First two images free; pay or recruit friends for more
This "freemium" model turned digital sexual violence into a viral social activity. Groups organized by tags like "K-pop girl groups," "female soldiers," "nurses," and specific school names created a culture of humiliation where participants competed to degrade women they knew personally.
The "Victim School Map" That Shocked the Nation
On August 27, 2024, a terrified middle school student created an interactive map showing which schools had dedicated deepfake Telegram groups. Within days, over 500 schools were marked as affected. The map went viral, triggering a national panic.
The response? Countless women and girls frantically deleted all photos from their social media—a desperate "digital exodus" to cut off AI training material. The military even removed all personnel photos from their internal network.
Timeline: From "Nth Room" to AI Deepfakes
| Date | Event |
|---|---|
| 2019-2020 | "Nth Room" scandal establishes Telegram as digital sex crime hub |
| June 2020 | Korea's first deepfake law—but watching/possessing remains legal |
| Late 2023 | Security Hero report reveals 53% Korean victim rate |
| January 2024 | Starship accidentally posts IVE deepfake |
| August 2024 | 227K-user Telegram bot exposed; victim school map released |
| September 2024 | President Yoon orders crackdown; 7-month police operation begins |
| September 30, 2024 | Telegram apologizes to Korean regulators, removes content |
| October 2024 | Seoul National University deepfake ring leader sentenced to 10 years |
| March 2025 | Police operation concludes; hundreds arrested |
| April 2025 | 8 arrested in HYBE-police coordinated action |
Legal Response: Closing the Loopholes
The Critical 2024-2025 Amendments
Before these reforms, South Korea's laws had a devastating gap: merely watching deepfake porn was legal. The amended Sexual Violence Punishment Act changed everything:
- Viewing/possession now criminal: Up to 3 years imprisonment or 30 million won ($22,600) fine
- Increased penalties for creation/distribution: Maximum sentence raised from 5 to 7 years
- Undercover investigation powers: Police can now infiltrate Telegram groups even when victims are adults
- No "non-commercial" excuse: Penalties apply regardless of profit motive
Enforcement Reality
Of 506 suspects arrested in the initial crackdown, 81% were teenagers (ages 10-19). This sparked national debate about lowering the age of criminal responsibility and implementing mandatory digital ethics education.
How to Report K-pop Deepfakes in South Korea
Government Channels
- Digital Sex Crime Victim Support Center
  - Phone: 02-735-8994
  - Website: d4u.stop.or.kr
  - Services: counseling, legal aid, content deletion support
- Korea Communications Standards Commission (KCSC)
  - Can block URLs within Korea
  - Accepts public reports of illegal content
- National Police Agency Cyber Bureau
  - Emergency: 112 (multilingual support)
  - For cases involving threats or extortion
Entertainment Company Portals
- SM Entertainment: kwangya119.smtown.com (requires SMTOWN account)
- HYBE: protect.hybecorp.com (covers all subsidiaries)
- YG Entertainment: Internal monitoring with public tip channels
What's Being Done to Protect K-pop Idols?
1. Zero-Tolerance, No-Settlement Policies
All major agencies—HYBE, SM, JYP, YG—have abandoned the old practice of quiet settlements. Every deepfake case now goes to criminal prosecution, regardless of the perpetrator's age.
2. AI vs. AI
Companies are deploying counter-AI technology that scans the web 24/7, automatically detecting synthetic content featuring their artists and issuing DMCA takedowns. Some are exploring invisible digital watermarks in official content to enable tracing.
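The watermarking idea can be sketched in miniature. The toy below is purely illustrative, not any agency's actual system: it derives a secret bit pattern with an HMAC and hides it in the least significant bits of pixel values, so an official image can later be checked for the mark. The key, image ID, and pixel list are all hypothetical; production schemes use far more robust, compression-resistant embedding.

```python
import hashlib
import hmac


def watermark_bits(key: bytes, image_id: str, n_bits: int) -> list[int]:
    """Derive a deterministic bit pattern from a secret key and an image ID."""
    digest = hmac.new(key, image_id.encode(), hashlib.sha256).digest()
    bits = []
    for byte in digest:
        for i in range(8):
            bits.append((byte >> i) & 1)
            if len(bits) == n_bits:
                return bits
    return bits


def embed(pixels: list[int], key: bytes, image_id: str) -> list[int]:
    """Overwrite each pixel's least significant bit with a watermark bit."""
    bits = watermark_bits(key, image_id, len(pixels))
    return [(p & ~1) | b for p, b in zip(pixels, bits)]


def verify(pixels: list[int], key: bytes, image_id: str) -> bool:
    """Check whether the expected bit pattern is present in the LSBs."""
    bits = watermark_bits(key, image_id, len(pixels))
    return all((p & 1) == b for p, b in zip(pixels, bits))


# Demo: a few grayscale pixel values stand in for real image data.
original = [120, 64, 255, 3, 88, 200, 17, 142]
key = b"agency-secret-key"                      # hypothetical signing key
marked = embed(original, key, "official-photo-001")

tampered = list(marked)
tampered[0] ^= 1                                # flip one LSB, as re-encoding would

print(verify(marked, key, "official-photo-001"))    # True: mark intact
print(verify(tampered, key, "official-photo-001"))  # False: mark broken
```

Because only the least significant bit of each pixel changes, the mark is invisible to the eye; the trade-off is fragility, which is exactly why deployed systems favor frequency-domain or learned watermarks that survive cropping and recompression.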
3. Police Partnerships
The February 2025 HYBE-police MOU created a direct hotline and streamlined evidence procedures. Within two months, 8 arrests followed—a model other companies are likely to adopt.
4. Platform Pressure
Korean agencies and government have pressured YouTube, X, and Instagram to update policies. YouTube now allows removal requests for AI-generated face/voice content. SM Entertainment has used KWANGYA 119 data for coordinated mass complaints to X.
Why This Matters Beyond K-pop
A Gender War Gone Digital
The deepfake crisis is the digital front of Korea's intense gender conflict. Perpetrators—overwhelmingly young men—use AI as a power tool to humiliate and control women, from celebrities to classmates. This is misogyny amplified by technology.
Ordinary Women Under Threat
The "victim school map" proved that idols are just the entry point. Once perpetrators master the technology, they target accessible victims. When high school girls delete all social media photos out of fear, they're being forced out of digital public life—a violation of social participation rights.
National Security Implications
When deepfakes infiltrate the military—with female soldiers' ID photos converted into explicit content—this becomes a national security issue. Such material could be weaponized for blackmail, morale destruction, or psychological warfare.
The Collapse of Trust
As one Korean sociologist observed: "Mutual trust has completely collapsed." Students distrust classmates. Female soldiers distrust colleagues. Women distrust the internet itself. Repairing this social trauma will take far longer than deleting videos.
Frequently Asked Questions
Q: Can international fans face prosecution for K-pop deepfakes?
Yes, and the risk is growing. Korean agencies are collecting evidence globally and cooperating with international law enforcement. If your country has a cybercrime treaty with Korea, you may face investigation; your social media accounts will almost certainly be banned.
Q: Can you really go to jail just for watching deepfakes in Korea?
Yes. Under the 2024 amendments, possessing, purchasing, storing, or viewing deepfake sexual imagery is punishable by up to 3 years imprisonment or 30 million won fine. "I only watched, didn't share" is no longer a valid defense.
Q: Why are so many perpetrators teenagers?
Experts cite three factors: (1) high digital literacy combined with low ethical constraints; (2) gamification through "invite friends for free credits" mechanics; (3) peer pressure in male chat groups where creating such content is seen as "cool."
Q: What is the "Delete Children" movement?
Originally a child privacy advocacy campaign, it became a desperate self-protection strategy during the crisis. Many parents, teachers, and minors proactively deleted all clear facial photos online to deny AI "training material."
Conclusion
The K-pop deepfake crisis represents a collision between democratized AI technology and deeply rooted gender inequality. While legal reforms and industry responses have accelerated, the fundamental challenge remains: in an era where algorithms can strip human dignity in seconds, how do we reclaim control?
The battle is far from over—but for the first time, perpetrators are learning that anonymity no longer guarantees impunity.
Sources
- Chosun Ilbo: "53% of global deepfake victims are Korean"
- Security Hero: 2023 Deepfake Status Report
- Michigan State International Law Review: "That Isn't Me!: South Korea's Deepfake Porn Crisis"
- PBS NewsHour: "Rise of explicit deepfakes wrecks women's lives"
- Music Business Worldwide: "HYBE deepfake crackdown sees eight arrested"
- The Guardian: "South Korea battles surge of deepfake pornography"
- Courthouse News: "South Korea fights deepfake porn with tougher punishment"
- OECD.AI Incident Database: Multiple agency legal actions

